Nonparametric learning and Regularization

Abstract

Several nonparametric methods in a regression model are presented. First, the most classical ones: piecewise polynomial estimators, estimation with spline bases, kernel estimators, and projection estimators on orthonormal bases (such as Fourier or wavelet bases). Since these methods suffer from the curse of dimensionality, we also present Generalized Additive Models and CART regression models. The main references for this course are the following books:
• The Elements of Statistical Learning by T. Hastie et al. [2]
• Introduction to Nonparametric Statistics (2009) by A. Tsybakov [4]
• Introduction to High-Dimensional Statistics by C. Giraud [1]
• Concentration Inequalities and Model Selection by P. Massart [3]
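To make the kernel estimators mentioned in the abstract concrete, here is a minimal sketch of the Nadaraya–Watson estimator with a Gaussian kernel. This is an illustrative example, not code from the course; the function name, bandwidth value, and test function are assumptions.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.05):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    f_hat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h).
    """
    # Pairwise scaled distances between query and training points
    diffs = (x_query[:, None] - x_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs**2)  # Gaussian kernel (normalizing constant cancels)
    return (weights @ y_train) / weights.sum(axis=1)

# Noisy samples from a smooth regression function
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)

x_grid = np.linspace(0.1, 0.9, 5)
f_hat = nadaraya_watson(x, y, x_grid)  # close to sin(2*pi*x_grid)
```

The bandwidth plays the role of the smoothing (regularization) parameter: too small overfits the noise, too large oversmooths the curvature.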


Similar articles

Nonparametric Bayesian Multi-task Learning with Max-margin Posterior Regularization

Learning a common latent representation can capture the relationships and share statistical strength among multiple tasks. To automatically resolve the unknown dimensionality of the latent representation, nonparametric Bayesian methods have been successfully developed with a generative process describing the observed data. In this paper, we present a discriminative approach to learning nonparamet...

Statistical Likelihood Representations of Prior Knowledge in Machine Learning

We show that maximum a posteriori (MAP) statistical methods can be used in nonparametric machine learning problems in the same way as in their current applications to parametric statistical problems, and give some examples of applications. This MAPN (MAP for nonparametric machine learning) paradigm can also reproduce, much more transparently, the same results as regularization methods in machine lea...

Iterative Regularization for Learning with Convex Loss Functions

We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, in iterative regularization no constraint or penalization is considered, and generalization is achieved by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framewo...
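The early-stopping idea in the abstract above can be sketched as plain gradient descent on an unpenalized squared loss, where the iteration count replaces the penalty parameter. This is a hypothetical minimal example, not the paper's algorithm (which covers general convex losses via the subgradient method); all names and constants here are illustrative.

```python
import numpy as np

def iterative_regularization(X, y, step=0.1, n_iter=100):
    """Gradient descent on the unpenalized empirical squared loss.

    No constraint or penalty term appears: stopping the iteration
    early acts as the regularizer (fewer steps = stronger smoothing).
    """
    w = np.zeros(X.shape[1])
    path = [w.copy()]
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of half the mean squared error
        w = w - step * grad
        path.append(w.copy())
    return path  # one iterate per candidate regularization level

# A small noisy linear regression problem
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
y = X @ w_true + 0.5 * rng.normal(size=50)

path = iterative_regularization(X, y)
```

In practice one would select the iterate along `path` with the best held-out validation error, mirroring how a penalty parameter is tuned.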

Max-Margin Nonparametric Latent Feature Models for Link Prediction

Link prediction is a fundamental task in statistical network analysis. Recent advances have been made on learning flexible nonparametric Bayesian latent feature models for link prediction. In this paper, we present a max-margin learning method for such nonparametric latent feature relational models. Our approach attempts to unite the ideas of max-margin learning and Bayesian nonparametrics to d...

Regularized Policy Iteration with Nonparametric Function Spaces

We study two regularization-based approximate policy iteration algorithms, namely REG-LSPI and REG-BRM, to solve reinforcement learning and planning problems in discounted Markov Decision Processes with large state and finite action spaces. At the core of these algorithms are the regularized extensions of Least-Squares Temporal Difference (LSTD) learning and Bellman Residual Minimization (BRM),...
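The regularized LSTD extension mentioned above can be sketched for linear value-function approximation under an assumed ℓ2 penalty. This is an illustrative reconstruction, not the paper's implementation; the feature setup and the two-state chain below are assumptions for demonstration.

```python
import numpy as np

def regularized_lstd(phi, phi_next, rewards, gamma=0.9, lam=1e-2):
    """l2-regularized LSTD for linear value-function approximation.

    Solves (Phi^T (Phi - gamma * Phi') + lam * I) w = Phi^T r,
    where row i of phi / phi_next holds the features of a sampled
    state and its observed successor, and rewards[i] is the reward.
    """
    d = phi.shape[1]
    A = phi.T @ (phi - gamma * phi_next) + lam * np.eye(d)
    b = phi.T @ rewards
    return np.linalg.solve(A, b)

# Two-state chain: state 0 -> state 1 with reward 1; state 1 is absorbing, reward 0.
phi = np.eye(2)                    # one-hot state features
phi_next = np.array([[0.0, 1.0],
                     [0.0, 1.0]])  # both transitions land in state 1
rewards = np.array([1.0, 0.0])

w = regularized_lstd(phi, phi_next, rewards)  # approximates V = (1, 0)
```

The penalty `lam` stabilizes the solve when the feature matrix is large or ill-conditioned, which is the regime the abstract targets (large state spaces, nonparametric function spaces).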



Publication date: 2017